Weakly supervised fine-grained classification method of Alzheimer’s disease based on improved visual geometry group network
Shuang DENG, Xiaohai HE, Linbo QING, Honggang CHEN, Qizhi TENG
Journal of Computer Applications    2022, 42 (1): 302-309.   DOI: 10.11772/j.issn.1001-9081.2021020258

To address the small differences between Magnetic Resonance Imaging (MRI) images of Alzheimer's Disease (AD) patients and Normal Control (NC) subjects, and the resulting difficulty of classifying them, a weakly supervised fine-grained classification method for AD based on an improved Visual Geometry Group (VGG) network was proposed. The Weakly Supervised Data Augmentation Network (WSDAN) was taken as the base model, composed mainly of a weakly supervised attention learning module, a data augmentation module, and a bilinear attention pooling module. Firstly, feature maps and attention maps were generated by the weakly supervised attention learning network, and the attention maps were used to guide data augmentation; both the original images and the augmented data were used as training inputs. Then, the element-wise product of the feature maps and the attention maps was computed by the bilinear attention pooling algorithm to obtain the feature matrix. Finally, the feature matrix was fed into the linear classification layer. Experimental results of the WSDAN base model with VGG19 as the feature extraction network on MRI data of AD show that, compared with the WSDAN base model, the model with image enhancement alone improved accuracy, sensitivity and specificity by 1.6, 0.34 and 0.12 percentage points respectively; the model with the improved VGG19 network alone improved accuracy and specificity by 0.7 and 2.82 percentage points respectively; and the model combining both improved accuracy, sensitivity and specificity by 2.1, 1.91 and 2.19 percentage points respectively.
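The bilinear attention pooling step described above — taking the element-wise product of each attention map with the feature map, then pooling the result into a feature matrix for the linear classifier — can be sketched as follows. This is a minimal NumPy illustration of the general technique; the function name, tensor shapes, and averaging convention are assumptions for the example, not taken from the paper.

```python
import numpy as np

def bilinear_attention_pool(features, attentions):
    """Sketch of bilinear attention pooling.

    features:   (B, C, H, W) feature maps from the backbone network
    attentions: (B, M, H, W) attention maps from the attention module
    Returns a (B, M*C) feature matrix for the linear classification layer.
    """
    B, C, H, W = features.shape
    M = attentions.shape[1]
    # For each attention map m: element-wise product with every feature
    # channel c, then global average pooling over the spatial dimensions.
    feature_matrix = np.einsum('bmhw,bchw->bmc', attentions, features) / (H * W)
    # Flatten the M part-features into one vector per sample.
    return feature_matrix.reshape(B, M * C)
```

Each of the M attention maps acts as a spatial mask highlighting one discriminative region, so the resulting matrix concatenates M region-specific pooled descriptors rather than a single global one.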
